
ELM training for Physics-Informed Neural Networks
Architectures that randomly transform their inputs have resurfaced in the Machine-Learning community over the past few years, largely because randomizing the nonlinearities and optimizing only the last layer by linear methods is computationally cheaper than nonlinearly optimizing all the parameters. Moreover, results on spaces of random functions and from random matrix theory show that training a shallow architecture by randomly choosing the nonlinearities in the first layer and computing the remaining parameters through a linear least-squares problem yields an approximation that is not much worse than the one obtained by optimally tuning the nonlinearities. One example of these training strategies is the Extreme Learning Machine, which will be the main focus of the talk.
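The training strategy described above can be sketched in a few lines of NumPy: the first-layer weights and biases are drawn at random and frozen, and only the output weights are fit by linear least squares. This is a minimal illustrative sketch on a toy regression problem, not the code discussed in the talk; the network width, weight distribution, and target function are arbitrary choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target (illustrative choice)
x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel()

n_hidden = 100
W = rng.normal(size=(1, n_hidden))  # random input weights, never trained
b = rng.normal(size=n_hidden)       # random biases, never trained

# Random nonlinear features produced by the frozen first layer
H = np.tanh(x @ W + b)

# Only the output weights are computed, via linear least squares
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ beta
print("max abs error:", np.max(np.abs(y_hat - y)))
```

Because the hidden layer is fixed, the whole "training" step reduces to one least-squares solve, which is the source of the computational savings mentioned above.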